33 research outputs found

    Capital adequacy at caisses populaires: a matter of sound management

    The objective of this study is to analyze the capitalization problem among caisses populaires. Capital insufficiency was related to several variables measuring the size of the caisse, clientele effects, business volume, return on assets versus return on equity, profitability, credit and mismatch risk, and the structure of the caisse's balance sheet. Statistical analyses conducted on 200 caisses populaires showed that a caisse with insufficient capital suffers from a management problem that is reflected in its capital.

    Curriculum Method Grounded on Didactic Engineering to Expertise Physical Education Program Proposal

    The purpose of this article is to identify and understand the reasons given by actors of the noosphere, and by the scientific literature, for implementing pole vault practice in physical education programs. The case of a didactic transposition for teaching pole vaulting at school is presented, and the relationships between teachers and learners of this discipline are identified. Epistemological analyses of pole vaulting practice, together with a transpositive study of its implementation, were used to establish the practice framework of the discipline. This background allowed quantitative and qualitative analyses of the professional learning of pole vaulting, which identified global and generic trends across adapted and integrated study sections. Points of convergence and separation between pole vaulting practice and learners' skills were highlighted, and difficulties related to contextual factors were identified. The synthesis of the didactic training analysis led to proposed practical teaching programs for pole vaulting at school. Keywords: Curriculum, Physical Education and Sports, Pole Vaulting, Knowledge Taught, Learn to Teach

    Teaching Strategies to Enhance Motor Skills Learning for Groups of Students: The Effects of Verbal and Visual Feedback on Performance in Pole Vault Practice

    This paper contributes a comparative study of a creative teaching method in physical education sessions. We seek to determine how teacher representations evolve with respect to understanding the particular practice of pole vaulting and its teaching when a computer-video artifact is integrated into the educational environment. The study was conducted in two upper-level high school classes totaling N = 44 students aged 15 to 18, split into two groups: one group received verbal and gestural communication, while the other was asked to view a video artifact. The pole vault learning cycle consisted of 14 sessions, at two sessions per week. The findings were consistent with a difference between teaching methods in line with the learning tasks. Through this experiment and lesson feedback, we conclude that the use of computer communication technology, and specifically of the video-image artifact, was a source of progress in motor learning for the large majority of students across the study groups. This research opens the way to understanding and appreciating how best to design video lectures that encourage learning and the development of knowledge. Keywords: physical education, artifact video, feedback, motor learning, skills DOI: 10.7176/JEP/10-7-06 Publication date: March 31st 2019

    Theory and Practice of Online Learning

    Edited book by Athabasca authors, describing production, delivery, theory, and support for online learning.

    Olive growing in arid area: further challenges from climate change

    In Tunisia, agriculture is vulnerable to climate change, with harmful impacts from subsequent warming and drying trends. In these regions, the olive industry plays a key role at the regional and national levels. Therefore, identifying adapted olive tree genotypes has become an urgent need for developing sustainable agriculture in arid lands. This study aimed to evaluate the impact of climatic variation on olive growing systems in arid and sub-arid areas of Tunisia. The phenological behavior of Chemlali and Zalmati, the main olive cultivars widespread in central and southern Tunisia, respectively, was considered to evaluate their capacity to adapt to contrasting climatic conditions. Over the 2005-2019 period, the olive cultivars presented variable flowering dates related to local climatic conditions. The Zalmati cultivar in Zarzis appears to bloom earlier than the Chemlali cultivar in the Sfax region, with average flowering dates of DOY-92 (April 3) and DOY-106 (April 17), respectively. A tendency toward an advancing growing season was observed with warmer winters, which leads to disruption of pollination, a high risk of insect attack, and consequently harmful effects on production and product quality. This investigation serves as a basis for making recommendations that take production areas into account, as well as for addressing projected climate change.

    Systematic Bias in Genomic Classification Due to Contaminating Non-neoplastic Tissue in Breast Tumor Samples

    Abstract Background Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error but not systematic bias, which standard tumor sampling methods make problematic: it is common for a substantial proportion (typically 30-50%) of a tumor sample to be composed of histologically benign tissue. This "normal" tissue could represent a source of non-random error, or systematic bias, in genomic classification. Methods To assess the sensitivity of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal tissue, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Results Simulations of normal tissue contamination caused misclassification of tumors by all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors showed an unpredictable direction of bias (normal contamination producing either a higher or a lower predicted risk of relapse), one signature showed a predictable direction of normal tissue effects. Because of this predictable direction, that signature (the PAM50) could be adjusted for normal tissue contamination, and these corrections improved sensitivity and negative predictive value. For all three assays, quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Conclusions Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors. All genomic predictors show some sensitivity to normal tissue contamination, and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.
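
The contamination effect the abstract describes can be illustrated with a toy linear mixture model. This is a sketch only: the gene weights, expression profiles, and weighted-sum score below are invented for illustration and are not the PAM50 or any published signature.

```python
import numpy as np

# Toy illustration of normal-tissue contamination: a recurrence risk
# score computed as a weighted sum of gene expression, applied to a
# linear tumor/normal mixture. All quantities here are made up.
rng = np.random.default_rng(0)

weights = rng.normal(size=50)           # hypothetical gene weights
tumor = rng.normal(loc=1.0, size=50)    # hypothetical tumor expression profile
normal = rng.normal(loc=0.0, size=50)   # hypothetical adjacent-normal profile

def risk_score(profile, weights):
    """Weighted-sum recurrence risk score for one expression profile."""
    return float(profile @ weights)

def contaminate(tumor, normal, fraction):
    """Linear mixture in which `fraction` of the sample is normal tissue."""
    return fraction * normal + (1.0 - fraction) * tumor

pure = risk_score(tumor, weights)
mixed = risk_score(contaminate(tumor, normal, 0.4), weights)
# Because the score is linear, 40% contamination pulls the score 40% of
# the way toward the normal-tissue score: a predictable direction of
# bias, which is what makes a correction strategy possible.
```

Under such a linear model the bias direction is predictable, mirroring why the abstract's predictably-biased signature could be corrected while signatures with nonlinear decision rules could not.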

    SEARCHPATTOOL: a new method for mining the most specific frequent patterns for binding sites with application to prokaryotic DNA sequences

    Abstract Background Computational methods to predict transcription factor binding sites (TFBS) based on exhaustive algorithms are guaranteed to find the best patterns but are often limited to short ones or impose constraints on the pattern type. Many patterns for binding sites in prokaryotic species are not well characterized but are known to be large, between 16 and 30 base pairs (bp), and to contain at least 2 conserved bases. The length of prokaryotic promoters (about 400 bp), and our interest in studying small sets of genes that could form clusters of co-regulated genes from microarray experiments, led to the development of a new exhaustive algorithm targeting these large patterns. Results We present Searchpattool, a new method to search for and select the most specific (conservative) frequent patterns. This method does not impose restrictions on the density or the structure of the pattern. The best patterns (motifs) are selected using several statistics, including a new application of a z-score based on the number of matching sequences. We compared Searchpattool against other well-known algorithms on a Bacillus subtilis group of 14 input sequences and found that, in our experiments, Searchpattool always performed best based on performance scores. Conclusion Searchpattool is a new method for pattern discovery relative to transcription factor binding sites for species or genes with short promoters. It outputs the most specific significant patterns and helps the biologist choose the best candidates.
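
The z-score on the number of matching sequences mentioned in the abstract can be sketched under a simple binomial background model. This is an assumption for illustration: the paper's exact background statistics are not specified here, and the match probability below is a stand-in.

```python
import math

# Hedged sketch: z-scoring a candidate pattern by how many input
# sequences contain it, under an assumed binomial background model in
# which each sequence independently matches with probability p_match.
def pattern_zscore(n_matching, n_sequences, p_match):
    """z-score of observing `n_matching` of `n_sequences` sequences
    containing the pattern under the binomial background model."""
    expected = n_sequences * p_match
    sd = math.sqrt(n_sequences * p_match * (1.0 - p_match))
    return (n_matching - expected) / sd

# e.g. a pattern found in 12 of 14 promoters, but expected by chance
# in only 20% of sequences, scores far above the background:
z = pattern_zscore(12, 14, 0.20)
```

Ranking candidate motifs by such a score favors patterns shared by many input sequences over patterns that merely recur by chance, which is the selection idea the abstract describes.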

    A comparison of machine learning algorithms for chemical toxicity classification using a simulated multi-scale data model

    Abstract Background Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts aim to discover patterns or classifiers in high-dimensional bioactivity space that predict tissue, organ, or whole-animal toxicological endpoints. Supervised machine learning is a powerful approach for discovering combinatorial relationships in complex in vitro/in vivo datasets. We present a novel model to simulate complex chemical-toxicology data sets and use this model to evaluate the relative performance of different machine learning (ML) methods. Results The classification performance of Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Naïve Bayes (NB), Recursive Partitioning and Regression Trees (RPART), and Support Vector Machines (SVM), in the presence and absence of filter-based feature selection, was analyzed using K-way cross-validation testing and independent validation on simulated in vitro assay data sets with varying levels of model complexity, numbers of irrelevant features, and measurement noise. While the prediction accuracy of all ML methods decreased as non-causal (irrelevant) features were added, some ML methods performed better than others. In the limit of using a large number of features, ANN and SVM were always in the top-performing set of methods, while RPART and KNN (k = 5) were always in the poorest-performing set. The addition of measurement noise and irrelevant features decreased the classification accuracy of all ML methods, with LDA suffering the greatest performance degradation. LDA performance is especially sensitive to the use of feature selection: filter-based feature selection generally improved performance, most strikingly for LDA. Conclusion We have developed a novel simulation model to evaluate machine learning methods for the analysis of data sets in which in vitro bioassay data are used to predict in vivo chemical toxicology. From our analysis, we can recommend several ML methods, most notably SVM and ANN, as good candidates for use in real-world applications in this area.
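
A minimal sketch of the kind of experiment described: simulated data with a few causal features and many irrelevant ones, a correlation-based filter for feature selection, and a nearest-centroid classifier. All of this is invented for illustration; the paper's simulation model, classifiers, and filter statistics are not reproduced here.

```python
import numpy as np

# Hypothetical setup: labels depend only on the first n_causal features;
# the remaining n_noise features are irrelevant, as in the paper's
# irrelevant-feature experiments.
rng = np.random.default_rng(1)
n, n_causal, n_noise = 200, 5, 95
X = rng.normal(size=(n, n_causal + n_noise))
y = (X[:, :n_causal].sum(axis=1) > 0).astype(int)

def filter_select(X, y, k):
    """Filter-based selection: keep the k features most correlated with y."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(scores)[-k:]

def centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Accuracy of a nearest-centroid classifier (stand-in for the ML methods)."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return float((pred == y_te).mean())

train, test = np.arange(150), np.arange(150, 200)
acc_all = centroid_accuracy(X[train], y[train], X[test], y[test])
keep = filter_select(X[train], y[train], n_causal)
acc_filtered = centroid_accuracy(X[train][:, keep], y[train],
                                 X[test][:, keep], y[test])
# Irrelevant features dilute the centroids; filtering out all but the
# most label-correlated features typically recovers accuracy.
```

The same harness can be repeated while varying the number of irrelevant features or adding measurement noise, which is how degradation curves like those in the abstract are obtained.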

    Revision of Chapter 3480 of the I.C.C.A. Handbook: Effects of the new definition of extraordinary items on the predictive value of operating income

    This study examines the effect of the revision of Chapter 3480 of the I.C.C.A. Handbook, concerning extraordinary items, on the predictive value of income before extraordinary items. The choice of predictive value as a criterion rests on the fact that the I.C.C.A. recognizes it as an essential characteristic of useful accounting information. The tightening of the definition of extraordinary items through the addition of a third criterion contributed to a reduction in the number of such items in the financial statements of Canadian companies. Indeed, the description of the sample showed that several categories of extraordinary items that were reported, before the revision of the Canadian standard, in the non-current portion of the income statement were moved, as a result of the revision, to the current portion of that statement. An examination of unusual items after the revision recovered the majority of the items that would have qualified as extraordinary had the old definition been applied. Items that do not satisfy all three criteria are still included in the computation of income but are now presented in the current portion of the income statement. Income before extraordinary items will therefore henceforth include a larger number of non-recurring items. Consequently, the addition of the third criterion to the definition of extraordinary items may reduce the predictive value of income before extraordinary items. The analysis period covers the six years surrounding the revision: three years before (1987, 1988, and 1989) and three years after (1990, 1991, and 1992). A first-order autoregressive model was used to predict earnings per share before extraordinary items for each period.
    The results suggest that the predictive value of earnings per share before extraordinary items decreased following the revision of Chapter 3480 of the Handbook. This study indicates that the debates over how extraordinary items should be defined and presented in the income statements of Canadian companies were not resolved by this revision.
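
The forecasting model the study relies on, a first-order autoregressive (AR(1)) process for earnings per share (EPS), can be sketched as follows. The EPS figures below are invented for illustration and are not from the study's sample.

```python
import numpy as np

# AR(1) model: eps[t] = a + b * eps[t-1] + error.
# Sketch only: coefficients are fitted by ordinary least squares on the
# lagged series, one plausible way to estimate such a model.

def fit_ar1(series):
    """OLS fit of series[t] = a + b * series[t-1]; returns (a, b)."""
    x = np.asarray(series[:-1], dtype=float)   # lagged values
    y = np.asarray(series[1:], dtype=float)    # current values
    b, a = np.polyfit(x, y, 1)                 # slope b, intercept a
    return a, b

def forecast_next(series):
    """One-step-ahead forecast of the next EPS value."""
    a, b = fit_ar1(series)
    return a + b * series[-1]

eps = [1.10, 1.25, 1.18, 1.32, 1.29, 1.41]     # hypothetical annual EPS
next_eps = forecast_next(eps)
```

Predictive value can then be assessed by comparing such one-step forecasts with realized EPS, for example via forecast errors computed separately for the pre- and post-revision periods.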